How to Escape Saddle Points Efficiently
Abstract
In order to prove the main theorem, we need to show that the algorithm will not get stuck at any point that either has a large gradient or is a saddle point. This idea is similar to previous works (e.g., Ge et al., 2015). We first state a standard lemma showing that if the current gradient is large, then we make progress in function value.

Lemma 12. Assume f(·) satisfies A1. Then for gradient descent with stepsize η < 1/ℓ, we have:

f(x_{t+1}) ≤ f(x_t) − (η/2)‖∇f(x_t)‖²
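As a quick numerical sanity check, the sketch below runs gradient descent on an ℓ-smooth quadratic (an assumed test function, not from the paper) and verifies the lemma's inequality at every step.

```python
# Numerical check of Lemma 12 on an l-smooth quadratic test function
# (an assumed example, not from the paper): with stepsize eta < 1/l,
# each gradient step should decrease f by at least (eta/2)*||grad f||^2.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T                        # symmetric PSD quadratic form
ell = np.linalg.eigvalsh(A)[-1]    # gradient Lipschitz constant l
eta = 0.5 / ell                    # stepsize eta < 1/l

f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = rng.standard_normal(5)
for _ in range(20):
    g = grad(x)
    x_next = x - eta * g
    # Lemma 12: f(x_{t+1}) <= f(x_t) - (eta/2) * ||grad f(x_t)||^2
    assert f(x_next) <= f(x) - 0.5 * eta * (g @ g) + 1e-12
    x = x_next
print("Lemma 12 inequality held at every step")
```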
Similar Papers
How to Escape Saddle Points Efficiently
This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost “dimension-free”). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are...
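To make the perturbation idea concrete, here is a minimal sketch: whenever the gradient is small (a candidate saddle or minimum) and no perturbation was added recently, jump to a random point in a small ball around the iterate. The parameter names, tuning, and test function below are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of perturbed gradient descent: when the gradient is
# small and no perturbation was added recently, jump to a random point
# in a small ball around the iterate, then resume plain GD steps.
# Parameters and the test function are illustrative assumptions.
import numpy as np

def perturbed_gd(grad, x0, eta=0.05, g_thresh=1e-3, radius=1e-2,
                 cooldown=20, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, last_jump = np.asarray(x0, dtype=float), -cooldown
    for t in range(iters):
        if np.linalg.norm(grad(x)) <= g_thresh and t - last_jump >= cooldown:
            u = rng.standard_normal(x.size)          # random direction
            x = x + radius * u / np.linalg.norm(u)   # small perturbation
            last_jump = t
        x = x - eta * grad(x)                        # plain GD step
    return x

# f(x, y) = x^2 + (y^2 - 1)^2 / 4 has a strict saddle at the origin and
# minima at (0, +-1); GD started exactly at the saddle never moves,
# while the perturbed variant escapes to one of the minima.
grad = lambda v: np.array([2.0 * v[0], v[1] * (v[1] ** 2 - 1.0)])
print(np.round(perturbed_gd(grad, np.zeros(2)), 2))  # ~ [0, 1] or [0, -1]
```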
Gradient Descent Can Take Exponential Time to Escape Saddle Points
Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not...
Basin constrained κ-dimer method for saddle point finding.
Within the harmonic approximation to transition state theory, the rate of escape from a reactant is calculated from local information at saddle points on the boundary of the state. The dimer minimum-mode following method can be used to find such saddle points. But as we show, dimer searches that are initiated from a reactant state of interest can converge to saddles that are not on the boundary...
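For intuition, the following is a minimum-mode following sketch in the dimer spirit: it inverts the force component along the lowest-curvature direction so the walker climbs out of the basin and converges to a first-order saddle. For brevity it reads the lowest mode off the exact Hessian, whereas an actual dimer method estimates that mode from a pair of nearby force evaluations; the 2D test surface is an assumption.

```python
# Minimum-mode following in miniature: invert the force along the
# lowest-curvature Hessian eigenvector and relax along the rest, so the
# walker climbs out of a minimum basin to a first-order saddle. A real
# dimer method estimates the lowest mode from two nearby force
# evaluations; this sketch uses the exact Hessian. Assumed 2D test
# surface: minima at (+-1, 0), first-order saddle at (0, 0).
import numpy as np

grad = lambda p: np.array([4.0 * p[0] * (p[0] ** 2 - 1.0), 10.0 * p[1]])
hess = lambda p: np.array([[12.0 * p[0] ** 2 - 4.0, 0.0], [0.0, 10.0]])

x = np.array([0.9, 0.2])                   # start inside a reactant basin
for _ in range(2000):
    force = -grad(x)
    v = np.linalg.eigh(hess(x))[1][:, 0]   # lowest-curvature mode
    x = x + 0.01 * (force - 2.0 * np.dot(force, v) * v)
print(np.round(x, 6))                      # ~ [0, 0], the saddle point
```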
Locating and Characterizing the Stationary Points of the Extended Rosenbrock Function
Two variants of the extended Rosenbrock function are analyzed in order to find the stationary points. The first variant is shown to possess a single stationary point, the global minimum. The second variant has numerous stationary points for high dimensionality. A previously proposed method is shown to be numerically intractable, requiring arbitrary-precision computation in many cases to enumerate...
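As a numerical companion, one can hunt for stationary points by solving ∇f = 0 from random starts and classifying each root via Hessian eigenvalues; the sketch below does this with SciPy's built-in Rosenbrock derivatives. The random-restart strategy is an illustrative assumption, not the paper's method, and floating-point root finding can of course miss or misidentify stationary points, which is the paper's point about numerical intractability.

```python
# Hunt for stationary points of the extended Rosenbrock function by
# solving grad f = 0 from random starts, then classify each distinct
# root by the signs of the Hessian eigenvalues. SciPy provides the
# gradient (rosen_der) and Hessian (rosen_hess); the random-restart
# search itself is an illustrative strategy, not the paper's method.
import numpy as np
from scipy.optimize import root, rosen_der, rosen_hess

dim, rng, found = 4, np.random.default_rng(0), []
for _ in range(50):
    x0 = rng.uniform(-2.0, 2.0, dim)
    sol = root(rosen_der, x0, jac=rosen_hess, tol=1e-12)
    if sol.success and not any(np.allclose(sol.x, y, atol=1e-6) for y in found):
        found.append(sol.x)

for x in found:
    eigs = np.linalg.eigvalsh(rosen_hess(x))
    kind = "local minimum" if eigs[0] > 1e-8 else "saddle point"
    print(np.round(x, 4), "->", kind)   # (1, 1, 1, 1) is the global minimum
```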
Efficient approaches for escaping higher order saddle points in non-convex optimization
Local search heuristics for non-convex optimization are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them from local...
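A concrete instance of such degeneracy (chosen here purely for illustration) is the "monkey saddle" f(x, y) = x³ − 3xy²: both the gradient and the Hessian vanish at the origin, so first- and second-order tests are silent, yet the origin is not a local minimum.

```python
# A degenerate ("monkey") saddle chosen for illustration:
# f(x, y) = x^3 - 3*x*y^2 has vanishing gradient AND vanishing Hessian
# at the origin, so neither a first- nor a second-order test can flag
# it, yet f decreases along the direction x < 0.
import numpy as np

f = lambda x, y: x**3 - 3.0 * x * y**2
grad = lambda x, y: np.array([3.0 * x**2 - 3.0 * y**2, -6.0 * x * y])
hess = lambda x, y: np.array([[6.0 * x, -6.0 * y], [-6.0 * y, -6.0 * x]])

print(grad(0.0, 0.0))              # [0. 0.]  -> first-order test passes
print(hess(0.0, 0.0))              # zero 2x2 -> second-order test silent
print(f(-0.1, 0.0) < f(0.0, 0.0))  # True: the origin is not a minimum
```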